19 research outputs found
Multi-digit Number Recognition from Street View Imagery using Deep Convolutional Neural Networks
Recognizing arbitrary multi-character text in unconstrained natural
photographs is a hard problem. In this paper, we address an equally hard
sub-problem in this domain viz. recognizing arbitrary multi-digit numbers from
Street View imagery. Traditional approaches to solve this problem typically
separate out the localization, segmentation, and recognition steps. In this
paper we propose a unified approach that integrates these three steps via the
use of a deep convolutional neural network that operates directly on the image
pixels. We employ the DistBelief implementation of deep neural networks in
order to train large, distributed neural networks on high quality images. We
find that the performance of this approach increases with the depth of the
convolutional network, with the best performance occurring in the deepest
architecture we trained, with eleven hidden layers. We evaluate this approach
on the publicly available SVHN dataset and achieve over 96% accuracy in
recognizing complete street numbers. We show that on a per-digit recognition
task, we improve upon the state-of-the-art, achieving 97.84% accuracy. We
also evaluate this approach on an even more challenging dataset generated from
Street View imagery containing several tens of millions of street number
annotations and achieve over 90% accuracy. To further explore the
applicability of the proposed system to broader text recognition tasks, we
apply it to synthetic distorted text from reCAPTCHA. reCAPTCHA is one of the
most secure reverse Turing tests that use distorted text to distinguish humans
from bots. We report a 99.8% accuracy on the hardest category of reCAPTCHA.
Our evaluations on both tasks indicate that at specific operating thresholds,
the performance of the proposed system is comparable to, and in some cases
exceeds, that of human operators.
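
A minimal sketch of the unified architecture this abstract describes, written in PyTorch (the paper used Google's DistBelief, so the framework, layer sizes, input size, and the five-digit cap here are illustrative assumptions): a shared convolutional backbone feeds one softmax head for the sequence length and one per digit position, so localization, segmentation, and recognition are learned jointly from raw pixels.

    import torch
    import torch.nn as nn

    class MultiDigitNet(nn.Module):
        """Joint length + per-position digit prediction from raw pixels."""
        def __init__(self, max_digits: int = 5):
            super().__init__()
            # Small stand-in backbone; the paper's best model had eleven hidden layers.
            self.backbone = nn.Sequential(
                nn.Conv2d(3, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2),
                nn.Flatten(),
                nn.Linear(64 * 16 * 16, 512), nn.ReLU(),  # assumes 64x64 crops
            )
            # Length head: 0..max_digits plus an extra "longer than max" class.
            self.length_head = nn.Linear(512, max_digits + 2)
            # One 10-way softmax head per digit position.
            self.digit_heads = nn.ModuleList(
                nn.Linear(512, 10) for _ in range(max_digits)
            )

        def forward(self, x):
            h = self.backbone(x)
            return self.length_head(h), [head(h) for head in self.digit_heads]

    # Training would minimize the summed cross-entropy of all heads; at test
    # time the length head says how many digit predictions to read out.
    model = MultiDigitNet()
    length_logits, digit_logits = model(torch.randn(8, 3, 64, 64))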
Using Simulation and Domain Adaptation to Improve Efficiency of Deep Robotic Grasping
Instrumenting and collecting annotated visual grasping datasets to train
modern machine learning algorithms can be extremely time-consuming and
expensive. An appealing alternative is to use off-the-shelf simulators to
render synthetic data for which ground-truth annotations are generated
automatically. Unfortunately, models trained purely on simulated data often
fail to generalize to the real world. We study how randomized simulated
environments and domain adaptation methods can be extended to train a grasping
system to grasp novel objects from raw monocular RGB images. We extensively
evaluate our approaches with a total of more than 25,000 physical test grasps,
studying a range of simulation conditions and domain adaptation methods,
including a novel extension of pixel-level domain adaptation that we term the
GraspGAN. We show that, by using synthetic data and domain adaptation, we are
able to reduce the number of real-world samples needed to achieve a given level
of performance by up to 50 times, using only randomly generated simulated
objects. We also show that by using only unlabeled real-world data and our
GraspGAN methodology, we obtain real-world grasping performance without any
real-world labels that is similar to that achieved with 939,777 labeled
real-world samples.
Comment: 9 pages, 5 figures, 3 tables
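
A minimal sketch of pixel-level domain adaptation in the spirit of GraspGAN, in PyTorch (the tiny networks, image size, and single adversarial loss are illustrative assumptions, not the paper's model): a generator translates rendered simulator frames toward the real-image distribution, a discriminator trained on unlabeled real frames supplies the adversarial signal, and the grasp predictor can then train on the adapted frames with the simulator's free labels.

    import torch
    import torch.nn as nn

    def conv_block(cin, cout):
        return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU())

    # Generator G: sim image -> "realistic" image. Discriminator D: real vs. adapted.
    G = nn.Sequential(conv_block(3, 32), conv_block(32, 32),
                      nn.Conv2d(32, 3, 3, padding=1))
    D = nn.Sequential(conv_block(3, 32), nn.AdaptiveAvgPool2d(1),
                      nn.Flatten(), nn.Linear(32, 1))

    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

    sim = torch.rand(4, 3, 64, 64)   # rendered frames with free grasp labels
    real = torch.rand(4, 3, 64, 64)  # unlabeled real camera frames

    # Discriminator step: distinguish real frames from adapted simulated frames.
    fake = G(sim).detach()
    loss_d = bce(D(real), torch.ones(4, 1)) + bce(D(fake), torch.zeros(4, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: make adapted frames indistinguishable from real ones.
    loss_g = bce(D(G(sim)), torch.ones(4, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
    # A grasp network (omitted here) would then consume G(sim) plus simulator labels.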